
    Editor’s Introduction


    Two Questions on Continuous Mappings

    In this paper, it is shown that a mapping from a sequential space is continuous iff it is sequentially continuous, which improves a known result by relaxing the first-countability assumption on the domain to sequentiality. An example is also given showing that open mappings need not be Darboux mappings, which answers a question posed by Wang and Yang.
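For reference, the result in the first claim is usually stated as follows (a standard formulation; the paper's own notation may differ):

```latex
% Continuity vs. sequential continuity on a sequential space
% (standard formulation; the paper's notation may differ).
\[
X \text{ sequential} \;\Longrightarrow\;
\Bigl( f \colon X \to Y \text{ continuous}
\iff
\bigl( x_n \to x \text{ in } X \Rightarrow f(x_n) \to f(x) \text{ in } Y \bigr) \Bigr).
\]
```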

    Detangling the Interrelationships Between Self-Regulation and Ill-Structured Problem Solving in Problem-Based Learning

    One of the goals of problem-based learning (PBL) is to promote self-regulation. Although self-regulation has been studied extensively, its interrelationships with ill-structured problem solving remain unclear. To clarify these interrelationships, this article proposes a conceptual framework illustrating the iterative processes among problem-solving stages (i.e., problem representation and solution generation) and self-regulation phases (i.e., planning, execution, and reflection). The dynamics of the interrelationships are further illustrated with three ill-structured problem-solving examples in different domains (i.e., information problem solving, historical inquiry, and science inquiry). The proposed framework contributes to research and practice by providing a new lens for examining self-regulation in ill-structured problem solving and by offering guidelines for designing effective tools and strategies to scaffold and assess PBL.

    Scaffolding Novice Instructional Designers' Problem-Solving Processes Using Question Prompts in a Web-Based Learning Environment

    The present study investigated the effects of question prompts in scaffolding novice instructional designers as they solved ill-structured instructional design problems in a Web-based learning environment. The effects of question prompts were studied under two prompting conditions (Question-Elaboration vs. Question-Guidance), taking into consideration learners' varying levels of prior knowledge and experience. The study employed a comparative, multiple-case design using think-aloud protocols followed by interviews. Eight graduate students from an Instructional Design and Technology program participated. While the qualitative findings supported previous research on the advantages of question prompts in scaffolding ill-structured problem solving, they also shed light on the specific cognitive and metacognitive functions, as well as the limitations, of question prompts under different conditions. The study has implications for designing instructional scaffolds to support ill-structured problem solving in various domains in a Web-based learning environment.

    In-context Autoencoder for Context Compression in a Large Language Model

    We propose the In-context Autoencoder (ICAE) for context compression in a large language model (LLM). The ICAE has two modules: a learnable encoder, adapted with LoRA from an LLM, that compresses a long context into a limited number of memory slots, and a fixed decoder, the target LLM itself, that can condition on the memory slots for various purposes. We first pretrain the ICAE with both autoencoding and language modeling objectives on massive text data, enabling it to generate memory slots that accurately and comprehensively represent the original context. We then fine-tune the pretrained ICAE on a small amount of instruction data to enhance its interaction with various prompts for producing desirable responses. Our experimental results demonstrate that an ICAE trained with this pretraining and fine-tuning paradigm can produce memory slots with 4× context compression, which the target LLM can condition on to respond to various prompts. These promising results highlight the ICAE's novel approach to the long-context problem and its potential to reduce computation and memory overheads of LLM inference in practice, suggesting further research effort on context management for LLMs. Our code and data will be released shortly.

    Comment: Work in progress.
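The two-module design can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch rendering of the idea, not the authors' implementation: a toy transformer (ToyLM) stands in for the target LLM, a small trainable transformer plays the role of the LoRA-adapted encoder, and learnable slot queries appended to the context produce the memory slots that the frozen decoder conditions on. All module names, sizes, and hyperparameters here are assumptions.

```python
# Minimal conceptual sketch of ICAE, not the authors' implementation.
# ToyLM stands in for a pretrained decoder-only LLM; in the paper the
# encoder is the LLM itself with LoRA adapters, approximated here by a
# small trainable transformer. All sizes are illustrative.
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Stand-in for the (frozen) target LLM."""
    def __init__(self, vocab=1000, d=64, layers=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        layer = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, layers)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, h):               # h: (batch, seq, d) hidden states
        return self.lm_head(self.blocks(h))

class ICAE(nn.Module):
    def __init__(self, lm: ToyLM, n_slots=4, d=64):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():  # the decoder (target LLM) is fixed
            p.requires_grad = False
        layer = nn.TransformerEncoderLayer(d, 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, 2)   # trainable encoder
        # Learnable memory-slot queries appended to the context tokens.
        self.slots = nn.Parameter(torch.randn(1, n_slots, d))

    def compress(self, ctx_ids):
        """Compress a long context into n_slots memory vectors."""
        h = self.lm.embed(ctx_ids)                        # (B, L, d)
        slots = self.slots.expand(h.size(0), -1, -1)      # (B, S, d)
        out = self.encoder(torch.cat([h, slots], dim=1))  # (B, L+S, d)
        return out[:, -slots.size(1):]                    # slot outputs only

    def forward(self, ctx_ids, prompt_ids):
        """Frozen LM conditions on [memory slots; prompt embeddings]."""
        mem = self.compress(ctx_ids)
        h = torch.cat([mem, self.lm.embed(prompt_ids)], dim=1)
        return self.lm(h)                                 # next-token logits

lm = ToyLM()
icae = ICAE(lm)
ctx = torch.randint(0, 1000, (2, 32))    # a "long" context of 32 tokens
prompt = torch.randint(0, 1000, (2, 8))  # a downstream prompt
print(icae(ctx, prompt).shape)           # torch.Size([2, 12, 1000])
```

Training this sketch would follow the recipe the abstract describes: pair compress() with autoencoding and language-modeling losses during pretraining, then fine-tune on instruction data while keeping the decoder frozen.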